I have set echo=FALSE so that most of the code chunks will not display. Please refer to the .Rmd file for source code.

1 Importing data

Here we import our data and make some summary plots.

1.1 EEG Data

Importing the primary EEG data set. This is odd-harmonic filtered data from a region of interest consisting of six electrodes over occipital cortex.


Figure 1.1: RMS data from two of the wallpaper groups, P2 and P4M. Odd harmonics are shown in A and B, while even harmonic data are shown in C and D, and occipital and parietal regions of interest are shown in dark and light gray, respectively. The two groups elicit very different response amplitudes for odd harmonics over occipital cortex, but for even harmonics those differences are much less pronounced.

We now import the data. We have three variables: wallpaper group (wg), subject, and root-mean-squared amplitude (rms).

## Rows: 400
## Columns: 3
## $ wg      <chr> "P2", "PM", "PG", "CM", "PMM", "PMG", "PGG", "CMM", "P4"…
## $ subject <chr> "s01", "s01", "s01", "s01", "s01", "s01", "s01", "s01", …
## $ rms     <dbl> 0.4013, 0.6555, 0.5547, 0.7635, 0.9185, 0.7285, 0.4320, …

If we plot the distribution of rms, we can clearly see that it is skewed. Furthermore, as negative rms amplitudes are impossible, we will use a lognormal distribution to model these data.


Figure 1.2: Data are the root-mean-squared (rms) amplitudes over the odd-harmonic filtered waveforms.
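Because the chunks are hidden, the skew check can be sketched in a few lines of base R. Everything below is illustrative simulation, not the actual data: the lognormal parameters are invented, and `rms_sim` stands in for the real rms values.

```r
# Illustrative only: simulate positively-skewed, strictly-positive "amplitudes"
# from a lognormal distribution (hypothetical parameters, not fitted values).
set.seed(1)
rms_sim <- rlnorm(1e4, meanlog = -0.5, sdlog = 0.25)

# For a right-skewed distribution the mean exceeds the median...
skew_gap_raw <- mean(rms_sim) - median(rms_sim)

# ...while on the log scale the two nearly coincide (roughly symmetric/normal).
skew_gap_log <- mean(log(rms_sim)) - median(log(rms_sim))

c(raw = skew_gap_raw, log = skew_gap_log)
```

The same logic motivates the lognormal likelihood used throughout: modelling log(rms) as normal respects the positivity constraint and removes the skew.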

1.2 Threshold Data

Here we import the data and select the columns that we’re interested in. Threshold gives the display duration (in seconds) required for the two stimuli to be accurately discriminated.

## Rows: 186
## Columns: 3
## $ subject   <chr> "person10", "person10", "person10", "person10", "perso…
## $ wg        <chr> "CM", "CMM", "P2", "P3", "P31M", "P3M1", "P4", "P4G", …
## $ threshold <dbl> 0.74125, 0.20216, 0.47697, 0.35012, 0.24529, 0.19022, …

As above, a summary of the data. Again we have a skewed distribution (with negative display durations being impossible), so we will also use a lognormal distribution to model the behavioural data.


Figure 1.3: Histogram of the display duration thresholds.

The table below shows how many groups per participant we have data for.

subject count
person0 15
person10 16
person12 16
person13 16
person14 16
person15 16
person16 16
person2 12
person3 15
person4 16
person55 16
person6 16
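A count like the one above takes a single line; the column names match the glimpse() output earlier, but the data frame below is a small stand-in, not the real d_dispthresh.

```r
# Stand-in data frame with the same columns as d_dispthresh (see glimpse above).
d <- data.frame(subject   = rep(c("person10", "person2"), times = c(3, 2)),
                wg        = c("CM", "CMM", "P2", "CM", "P2"),
                threshold = c(0.74, 0.20, 0.48, 0.55, 0.31))

# Number of wallpaper groups with threshold data, per participant.
as.data.frame(table(d$subject))
```

With the tidyverse loaded (as here), `count(d_dispthresh, subject)` produces the same table.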

1.3 Control Data

In addition to the primary EEG data set, we are also importing two control data sets which are (a) even harmonic data from the same occipital electrodes, and (b) odd harmonic data from six parietal electrodes (see Figure 1.1 and the main paper).

1.4 Symmetry Information

Information on the symmetries and subgroups contained within each wallpaper group is summarised in the files symmeries_in_group.csv, subgroup_relations.csv, and subgroup_normal.csv.

1.4.1 Types of Symmetry

Table 1.1 summarises the symmetries contained in each wallpaper group: the highest order of rotation, together with the number of distinct reflection and glide-reflection axes. For example, P2 contains only rotations of order 2, while P4M contains order-4 rotations together with four reflection axes and two glide axes.

Table 1.1: wallpaper group summary
group rotation reflection glide
P2 2 0 0
PM 0 1 0
PG 0 0 1
CM 0 1 1
PMM 2 2 0
PMG 2 1 1
PGG 2 0 2
CMM 2 2 2
P4 4 0 0
P4M 4 4 2
P4G 4 2 4
P3 3 0 0
P3M1 3 3 3
P31M 3 3 3
P6 6 0 0
P6M 6 6 6

1.4.2 Subgroup Relationships

Import subgroup information and display a table of the relationships that we will be investigating. Relationships taken from Coxeter & Moser (1972).

Table 1.2: Summary of subgroup relationships. The numbers indicate the index of the subgroup, while italics indicate normal subgroups. Relationships written in yellow text are not included in our analysis.
subgroup P2 PG PM CM PGG PMG PMM CMM P4 P4G P4M P3 P3M1 P31M P6 P6M
P2 2 - - - - - - - - - - - - - - -
PG - 2 - - - - - - - - - - - - - -
PM - 2 2 2 - - - - - - - - - - - -
CM - 2 2 3 - - - - - - - - - - - -
PGG 2 2 - - 3 - - - - - - - - - - -
PMG 2 2 2 4 2 3 - - - - - - - - - -
PMM 2 4 2 4 4 2 2 2 - - - - - - - -
CMM 2 4 4 2 2 2 2 4 - - - - - - - -
P4 2 - - - - - - - 2 - - - - - - -
P4G 4 4 8 4 2 4 4 2 2 9 - - - - - -
P4M 4 8 4 4 4 4 2 2 2 2 2 - - - - -
P3 - - - - - - - - - - - 3 - - - -
P3M1 - 6 6 3 - - - - - - - 2 4 3 - -
P31M - 6 6 3 - - - - - - - 2 3 4 - -
P6 3 - - - - - - - - - - 2 - - 4 -
P6M 6 12 12 6 6 6 6 3 - - - 4 2 2 2 3

We will remove identity relationships (i.e., a group is a subgroup of itself) and the three pairs of wallpaper groups that are subgroups of each other (e.g., PM is a subgroup of CM, and CM is a subgroup of PM). This leaves us with a total of 63 subgroup relationships to include in our analysis.
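The filtering step can be sketched as below. The long-format relations data frame and its column names are our assumptions about how subgroup_relations.csv is laid out, and only a handful of relations are included for illustration.

```r
# Hypothetical long format: one row per (subgroup, supergroup) relation.
rel <- data.frame(sub = c("P2", "PM", "CM", "PM", "P2"),
                  sup = c("P2", "CM", "PM", "PMM", "PMM"))

# Drop identity relations (a group as a subgroup of itself).
rel <- subset(rel, sub != sup)

# Drop mutual pairs (e.g. PM < CM and CM < PM), keeping neither direction.
key    <- paste(pmin(rel$sub, rel$sup), pmax(rel$sub, rel$sup))
mutual <- key %in% key[duplicated(key)]
rel    <- rel[!mutual, ]
rel
```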

2 Bayesian Analysis

Here are the details of the Bayesian multi-level modelling. Our general approach is to first define priors (checking them with a prior-predictive simulation), and then compute the posterior by fitting the models to the data.

2.1 Define Priors

In this section we will specify some priors. We then use a prior-predictive check to assess whether the priors are reasonable (i.e., on the same order of magnitude as our measurements).

2.1.1 Fixed Effects

Our independent variable is a categorical factor with 16 levels. We will drop the intercept from our model and instead fit a coefficient for each factor level (\(y \sim x - 1\)). As our dependent variable will be log-transformed, we can use the priors below:

prior <- c(
  set_prior("normal(0,2)", class = "b"),    
  set_prior("cauchy(0,2)", class = "sigma"))
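Because the likelihood is lognormal, the normal(0, 2) coefficient prior lives on the log scale, so its permissiveness is easy to quantify: roughly 95% of its mass lies within ±2 SD, i.e. ±4 log units.

```r
# +/- 4 log units corresponds to response-scale medians between ~0.02 and ~55,
# i.e. wide but not absurd for amplitudes that are of order 1.
exp(c(-4, 4))
#> [1]  0.01831564 54.59815003
```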

2.1.2 Group-level Effects

We will keep the default weakly informative priors for the group-level (‘random’) effects. From the brms documentation:

[…] restricted to be non-negative and, by default, have a half student-t prior with 3 degrees of freedom and a scale parameter that depends on the standard deviation of the response after applying the link function. Minimally, the scale parameter is 10. This prior is used (a) to be only very weakly informative in order to influence results as few as possible, while (b) providing at least some regularization to considerably improve convergence and sampling efficiency.

2.1.3 Prior Predictive Check

Now we can specify our Bayesian multi-level model and priors. Note that as we are using sample_prior = 'only', the model will not learn anything from our data.

m_prior <- brm(data = d_eeg, 
  rms ~ wg-1 + (1|subject),
  family = "lognormal", 
  prior = prior, 
  iter = n_iter,
  sample_prior = 'only')

We can use this model to generate data.

## Rows: 320,000
## Columns: 2
## $ key   <chr> "P2", "P2", "P2", "P2", "P2", "P2", "P2", "P2", "P2", "P2"…
## $ value <dbl> 22.46961872, 1.56104632, 4.33755148, 0.44560732, 4.7814215…

Figure 2.1: The density plot shows the distribution of the empirical data, while the blue line shows the 66% and 95% prediction intervals.
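The same prior-predictive logic can be sketched without brms by drawing parameters from the priors directly and simulating lognormal responses. Everything here is pure simulation; the half-Student-t scale of 2.5 for the subject SD is an assumption standing in for the data-dependent brms default.

```r
set.seed(2)
n_draw <- 5000

# Draw parameters from (approximately) the priors used above.
b      <- rnorm(n_draw, 0, 2)            # group coefficient, normal(0, 2)
sigma  <- abs(rcauchy(n_draw, 0, 2))     # residual SD, half-cauchy(0, 2)
sd_sub <- abs(rt(n_draw, df = 3)) * 2.5  # subject SD, half-t(3); scale assumed
u      <- rnorm(n_draw, 0, sd_sub)       # one subject's random intercept

# Prior-predictive rms draws on the (strictly positive) response scale.
rms_pred <- rlnorm(n_draw, meanlog = b + u, sdlog = sigma)
quantile(rms_pred, c(0.025, 0.5, 0.975))
```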

We can see that i) our priors are relatively weak, as the predictions span several orders of magnitude, and ii) our empirical data fall within this range.

2.2 Compute Posterior

2.2.1 Fit Model to EEG Data

We will now fit the model to the data.

##  Family: lognormal 
##   Links: mu = identity; sigma = identity 
## Formula: rms ~ wg - 1 + (1 | subject) 
##    Data: d_eeg (Number of observations: 400) 
## Samples: 4 chains, each with iter = 10000; warmup = 5000; thin = 1;
##          total post-warmup samples = 20000
## 
## Group-Level Effects: 
## ~subject (Number of levels: 25) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.38      0.06     0.28     0.52 1.00     1290     2586
## 
## Population-Level Effects: 
##        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## wgCM      -0.48      0.09    -0.66    -0.30 1.01      560     1654
## wgCMM     -0.12      0.09    -0.29     0.06 1.01      555     1660
## wgP2      -0.94      0.09    -1.11    -0.77 1.01      556     1654
## wgP3      -0.81      0.09    -0.99    -0.63 1.02      554     1589
## wgP31M    -0.43      0.09    -0.60    -0.25 1.01      524     1458
## wgP3M1    -0.14      0.09    -0.31     0.04 1.01      552     1687
## wgP4      -0.48      0.09    -0.65    -0.30 1.02      540     1760
## wgP4G     -0.25      0.09    -0.42    -0.07 1.01      564     1768
## wgP4M      0.16      0.09    -0.01     0.33 1.02      538     1631
## wgP6      -0.48      0.09    -0.65    -0.30 1.02      563     1473
## wgP6M     -0.04      0.09    -0.21     0.14 1.02      555     1303
## wgPG      -0.93      0.09    -1.10    -0.75 1.01      559     1587
## wgPGG     -0.72      0.09    -0.89    -0.54 1.02      547     1625
## wgPM      -0.59      0.09    -0.76    -0.41 1.01      557     1666
## wgPMG     -0.25      0.09    -0.42    -0.08 1.01      585     1654
## wgPMM     -0.06      0.09    -0.24     0.11 1.02      544     1702
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.23      0.01     0.21     0.25 1.00     9736    12158
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

We will now look at the model’s predictions for the average participant (i.e., ignoring the random intercepts).
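Since the linear predictor is on the log scale, the population-level posterior means in the table above translate directly into response-scale predictions. The sketch below uses two coefficients read off the summary; note that the lognormal *median* is exp(mu), while the *mean* picks up an extra correction of exp(sigma²/2).

```r
# Posterior mean coefficients for two groups, read off the summary table above.
b_P2  <- -0.94
b_P4M <-  0.16
sigma <-  0.23

# Median predicted rms for the average participant (random intercepts at zero):
# P2 ~ 0.39, P4M ~ 1.17.
exp(c(P2 = b_P2, P4M = b_P4M))

# Mean predicted rms includes the lognormal correction exp(sigma^2 / 2),
# about a 2.7% upward shift here.
exp(c(P2 = b_P2, P4M = b_P4M) + sigma^2 / 2)
```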


Figure 2.2: The density plot shows the distribution of the empirical data, while the blue line shows the 66% and 95% prediction intervals.

2.2.2 Fit Model to Psychophysical Data

We will now fit the model to the data.

##  Family: lognormal 
##   Links: mu = identity; sigma = identity 
## Formula: threshold ~ wg - 1 + (1 | subject) 
##    Data: d_dispthresh (Number of observations: 186) 
## Samples: 4 chains, each with iter = 10000; warmup = 5000; thin = 1;
##          total post-warmup samples = 20000
## 
## Group-Level Effects: 
## ~subject (Number of levels: 12) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.39      0.11     0.23     0.65 1.00     3693     5875
## 
## Population-Level Effects: 
##        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## wgCM      -0.22      0.17    -0.53     0.11 1.00     2997     5074
## wgCMM     -1.15      0.17    -1.47    -0.81 1.00     2897     4728
## wgP2      -0.29      0.17    -0.61     0.05 1.00     2947     4699
## wgP3      -0.68      0.17    -1.00    -0.34 1.00     2847     4477
## wgP31M    -1.18      0.17    -1.50    -0.84 1.00     2916     4727
## wgP3M1    -1.34      0.16    -1.66    -1.02 1.00     3039     5170
## wgP4      -0.89      0.17    -1.20    -0.55 1.00     2924     4759
## wgP4G     -1.22      0.17    -1.54    -0.87 1.00     3036     4954
## wgP4M     -1.29      0.17    -1.60    -0.95 1.00     2878     4328
## wgP6      -1.20      0.17    -1.53    -0.86 1.00     3018     5273
## wgP6M     -1.41      0.17    -1.75    -1.07 1.00     2956     5074
## wgPG       0.36      0.17     0.04     0.70 1.00     2957     4847
## wgPGG     -0.31      0.17    -0.62     0.03 1.00     2867     4717
## wgPM      -0.79      0.17    -1.11    -0.44 1.00     3054     4636
## wgPMG     -0.96      0.17    -1.28    -0.62 1.00     2772     4770
## wgPMM     -1.17      0.17    -1.49    -0.84 1.00     3049     4827
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.41      0.02     0.37     0.46 1.00    13006    12477
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

2.2.3 EEG Control Data

We will also fit models to the control data. As we can see from Figure 2.4, the group differences are much smaller.


Figure 2.3: The density plot shows the distribution of the empirical data, while the blue line shows the 66% and 95% prediction intervals.

3 Subgroup Comparisons

We will now compute the difference between sub- and super-groups.

3.1 Primary EEG Data

Finally, we calculate the probability that the RMS difference between subgroup and super-group is larger than zero given the data. This information is then binned so we can colour in the posterior density plots.
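With posterior draws in hand, this probability is just the proportion of draws above zero. The draws below are simulated stand-ins (normal approximations using the posterior means and errors of, e.g., wgP2 and wgPMM from the EEG model), not the real MCMC samples.

```r
set.seed(3)
# Stand-in posterior draws for a subgroup/supergroup pair of coefficients.
draws_sub <- rnorm(20000, mean = -0.94, sd = 0.09)  # e.g. wgP2
draws_sup <- rnorm(20000, mean = -0.06, sd = 0.09)  # e.g. wgPMM

d_log_rms <- draws_sup - draws_sub  # supergroup minus subgroup, in log(rms)
mean(d_log_rms > 0)                 # P(difference > 0 | data)
```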


Figure 3.1: Distributions of the difference in mean log(rms) between sub- and super-groups. The index of each relationship is indicated by the colour of the y-axis label. The fill of the density plots indicates the probability of the difference being greater than zero.

3.2 Psychophysical Data

We can do the same for the display duration thresholds from our psychophysics experiment. Here we are looking for the opposite effect, namely that display durations are larger for subgroups than for supergroups (see main paper), so we calculate the probability that the differences in duration are smaller than zero.


Figure 3.2: Distributions of the difference in mean log display duration threshold between sub- and super-groups. The index of each relationship is indicated by the colour of the y-axis label. The fill of the density plots indicates the probability of the difference being less than zero.

3.3 Control EEG Data

We will now do exactly the same with the control data (odd harmonic data from parietal electrodes, and even harmonic data from occipital electrodes).


Figure 3.3: Distributions of the difference in mean log(rms) between sub- and super-groups. The index of each relationship is indicated by the colour of the y-axis label. The fill of the density plots indicates the probability of the difference being greater than zero.


Figure 3.4: Distributions of the difference in mean log(rms) between sub- and super-groups. The index of each relationship is indicated by the colour of the y-axis label. The fill of the density plots indicates the probability of the difference being greater than zero.

3.4 Summary

We can summarise the subgroup comparison plots above by plotting ROC curves for each of our four measurements (Figure 3.5).

eeg threshold occ_even par_odd p
56 48 32 22 0.95

Figure 3.5: This figure shows how many of our 63 comparisons are classed as having a greater-than-zero difference (less-than-zero for the display durations) for probability cut-offs between 0.5 and 1.0.

If we take \(p\)=0.95 as our cut-off, we can see that the subgroup relations are preserved in 56/63 = 89% and 48/63 = 76% of the comparisons for the primary EEG data and the display durations, respectively. This compares to 32/63 = 51% and 22/63 = 35% for the control EEG conditions.
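The counts in the table above are column-wise tallies of comparisons whose posterior probability clears the cut-off. In this sketch, `p_mat` is a random stand-in for the real 63 x 4 matrix of probabilities, so only the mechanics (not the numbers) carry over.

```r
set.seed(4)
# Stand-in: 63 comparisons x 4 measurements, probabilities in [0, 1].
p_mat <- matrix(runif(63 * 4), nrow = 63,
                dimnames = list(NULL, c("eeg", "threshold", "occ_even", "par_odd")))

# Number (and percentage) of comparisons classed as preserved at p >= 0.95.
n_pass <- colSums(p_mat >= 0.95)
round(100 * n_pass / nrow(p_mat))
```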

4 Additional Analysis

4.1 Replication of Kohler et al (2016)

We can look at the groups that only contain rotations, and see if we obtain the parametric response as documented in Kohler et al. (2016).
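The rotation-only subset and the rotation covariate can be sketched as follows. The mapping is our own construction from Table 1.1 (the four groups with rotations but no reflections or glides), and the data frame is a tiny stand-in for d_eeg, which really has 25 subjects by 16 groups.

```r
# Rotation-only wallpaper groups and their order of rotation (Table 1.1).
rot_order <- c(P2 = 2, P3 = 3, P4 = 4, P6 = 6)

# Stand-in for d_eeg (illustrative values only).
d_eeg <- data.frame(wg  = c("P2", "P4", "PMM", "P6"),
                    rms = c(0.40, 0.62, 0.93, 0.61))

# Keep rotation-only groups and attach the rotation covariate.
d_eeg_rot <- subset(d_eeg, wg %in% names(rot_order))
d_eeg_rot$rotation <- rot_order[d_eeg_rot$wg]
d_eeg_rot
```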

##  Family: lognormal 
##   Links: mu = identity; sigma = identity 
## Formula: rms ~ rotation + (1 | subject) 
##    Data: d_eeg_rot (Number of observations: 100) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Group-Level Effects: 
## ~subject (Number of levels: 25) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.39      0.07     0.27     0.54 1.01      984     1299
## 
## Population-Level Effects: 
##           Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept    -1.15      0.11    -1.36    -0.94 1.00     1038     1757
## rotation      0.12      0.02     0.09     0.16 1.00     6242     2944
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.27      0.02     0.23     0.32 1.00     3014     3165
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

We will also investigate whether we see the corresponding pattern in the display duration threshold data, with the time taken to detect the symmetry decreasing as the amount of rotational symmetry increases.

##  Family: lognormal 
##   Links: mu = identity; sigma = identity 
## Formula: threshold ~ rotation + (1 | subject) 
##    Data: d_disp_rot (Number of observations: 46) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Group-Level Effects: 
## ~subject (Number of levels: 12) 
##               Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept)     0.48      0.17     0.18     0.87 1.00     1066     1461
## 
## Population-Level Effects: 
##           Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept    -0.00      0.24    -0.47     0.48 1.00     2020     2310
## rotation     -0.22      0.05    -0.31    -0.12 1.00     3833     2481
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.47      0.06     0.36     0.62 1.00     2263     1904
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

Figure 4.1: Red lines show empirical data, blue lines show the model fit.

4.2 Index and Normality

Subgroup relations can be classified by their index, and by whether they are normal or not. Here we investigate the extent to which these two variables can account for the variation between the subgroup relationships.

First of all, we run the model for the EEG rms data.

##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: mean_value ~ index * normal 
##    Data: comp_summary$eeg (Number of observations: 63) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept        0.12      0.14    -0.16     0.40 1.00     2346     2695
## index            0.05      0.02     0.01     0.10 1.00     2469     2658
## normal           0.04      0.18    -0.31     0.39 1.00     1803     2367
## index:normal     0.07      0.05    -0.02     0.16 1.00     1935     2643
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.27      0.02     0.22     0.32 1.00     3103     2568
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

And now for the display duration thresholds.

##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: mean_value ~ index * normal 
##    Data: comp_summary$threshold (Number of observations: 63) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept       -0.59      0.23    -1.05    -0.14 1.00     2382     2894
## index           -0.05      0.04    -0.13     0.02 1.00     2386     2744
## normal           0.45      0.30    -0.13     1.04 1.00     2022     2321
## index:normal    -0.13      0.08    -0.28     0.03 1.00     2205     2459
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.44      0.04     0.37     0.53 1.00     2634     2505
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

It is unclear from the above summary tables whether either variable has a clear effect. For comparison, we also fit the equivalent frequentist linear models.

## 
## Call:
## lm(formula = mean_value ~ index * normal, data = comp_summary$threshold)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8215 -0.2812 -0.0115  0.3575  0.8305 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)  
## (Intercept)  -0.59783    0.23023  -2.597   0.0119 *
## index        -0.05106    0.03719  -1.373   0.1750  
## normal        0.46807    0.29571   1.583   0.1188  
## index:normal -0.13610    0.07604  -1.790   0.0786 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4286 on 59 degrees of freedom
## Multiple R-squared:  0.2083, Adjusted R-squared:  0.168 
## F-statistic: 5.174 on 3 and 59 DF,  p-value: 0.003055
## 
## Call:
## lm(formula = mean_value ~ index * normal, data = comp_summary$eeg)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.83338 -0.18304  0.00269  0.17946  0.52589 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)  
## (Intercept)   0.12564    0.14074   0.893   0.3756  
## index         0.05404    0.02274   2.377   0.0207 *
## normal        0.03715    0.18077   0.206   0.8379  
## index:normal  0.06695    0.04648   1.440   0.1551  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.262 on 59 degrees of freedom
## Multiple R-squared:  0.2026, Adjusted R-squared:  0.1621 
## F-statistic: 4.998 on 3 and 59 DF,  p-value: 0.003721

We can see that the index of the subgroup relationship has an effect on both the difference in log(rms) and the difference in log(display duration): relationships with a higher index lead to larger differences.

The effect of index and normality on log(rms) and log(ms).

Figure 4.2: The effect of index and normality on log(rms) and log(ms).

4.3 Correlation Between Primary EEG data and Psychophysical Thresholds

Finally, we will investigate whether there is a correlation between our primary EEG measure (rms amplitude of odd harmonics over occipital cortex) and our display duration thresholds. As the two measures come from different samples of participants, we are unable to make a direct comparison. However, we can use the results of the models discussed in Section 3 and check for a correlation between the predicted values of the two measures.

##     Estimate  Est.Error      Q2.5     Q97.5
## R2 0.4370685 0.07323351 0.2785228 0.5562492

We can see that although the correlation is relatively weak, the credible interval indicates that we can be reasonably confident that \(R^2>0\) (i.e., the 95% credible interval is 0.28 - 0.56).
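As a rough sanity check using only the population-level posterior means copied from the two model summaries in Sections 2.2.1 and 2.2.2 (rather than the full posterior predictions used above), the group-level estimates for the two measures are clearly negatively correlated: groups with larger odd-harmonic responses tend to need shorter display durations.

```r
# Population-level posterior means from the two summaries above,
# in the same (alphabetical) group order: CM, CMM, P2, P3, P31M, P3M1,
# P4, P4G, P4M, P6, P6M, PG, PGG, PM, PMG, PMM.
b_eeg    <- c(-0.48, -0.12, -0.94, -0.81, -0.43, -0.14, -0.48, -0.25,
               0.16, -0.48, -0.04, -0.93, -0.72, -0.59, -0.25, -0.06)
b_thresh <- c(-0.22, -1.15, -0.29, -0.68, -1.18, -1.34, -0.89, -1.22,
              -1.29, -1.20, -1.41,  0.36, -0.31, -0.79, -0.96, -1.17)

# Correlation between the two sets of group-level log-scale estimates.
cor(b_eeg, b_thresh)
```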

Scatter plot showing the correlation between our two measures. Each line is a sample from the posterior of a Bayesian linear regression.

Figure 4.3: Scatter plot showing the correlation between our two measures. Each line is a sample from the posterior of a Bayesian linear regression.

5 Package and Session Information

Details of packages, etc, are given below.

## R version 3.6.1 (2019-07-05)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS Mojave 10.14.6
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] magick_2.3      patchwork_1.0.0 ggthemes_4.2.0  see_0.2.1      
##  [5] latex2exp_0.4.0 ggridges_0.5.1  tidybayes_2.0.2 brms_2.12.0    
##  [9] Rcpp_1.0.5      forcats_0.4.0   stringr_1.4.0   dplyr_1.0.0    
## [13] purrr_0.3.4     readr_1.3.1     tidyr_1.1.0     tibble_3.0.1   
## [17] ggplot2_3.3.2   tidyverse_1.2.1 bookdown_0.18  
## 
## loaded via a namespace (and not attached):
##   [1] colorspace_1.4-1          ellipsis_0.3.1           
##   [3] rsconnect_0.8.15          ggstance_0.3.3           
##   [5] markdown_1.1              base64enc_0.1-3          
##   [7] rstudioapi_0.11           farver_2.0.3             
##   [9] rstan_2.19.3              svUnit_0.7-12            
##  [11] DT_0.14                   fansi_0.4.1              
##  [13] lubridate_1.7.4           xml2_1.2.2               
##  [15] bridgesampling_0.7-2      knitr_1.25               
##  [17] shinythemes_1.1.2         bayesplot_1.7.0          
##  [19] jsonlite_1.7.0            broom_0.5.2              
##  [21] shiny_1.3.2               compiler_3.6.1           
##  [23] httr_1.4.1                backports_1.1.8          
##  [25] assertthat_0.2.1          Matrix_1.2-17            
##  [27] cli_2.0.2                 later_0.8.0              
##  [29] htmltools_0.3.6           prettyunits_1.1.1        
##  [31] tools_3.6.1               igraph_1.2.4.1           
##  [33] coda_0.19-3               gtable_0.3.0             
##  [35] glue_1.4.1                reshape2_1.4.3           
##  [37] cellranger_1.1.0          vctrs_0.3.1              
##  [39] nlme_3.1-140              crosstalk_1.0.0          
##  [41] insight_0.5.0             xfun_0.8                 
##  [43] ps_1.3.3                  rvest_0.3.4              
##  [45] mime_0.7                  miniUI_0.1.1.1           
##  [47] lifecycle_0.2.0           gtools_3.8.1             
##  [49] zoo_1.8-6                 scales_1.1.1             
##  [51] colourpicker_1.0          hms_0.5.0                
##  [53] promises_1.0.1            Brobdingnag_1.2-6        
##  [55] parallel_3.6.1            inline_0.3.15            
##  [57] shinystan_2.5.0           yaml_2.2.0               
##  [59] gridExtra_2.3             loo_2.3.0                
##  [61] StanHeaders_2.21.0-5      stringi_1.4.6            
##  [63] highr_0.8                 bayestestR_0.3.0         
##  [65] dygraphs_1.1.1.6          pkgbuild_1.0.8           
##  [67] rlang_0.4.6               pkgconfig_2.0.3          
##  [69] matrixStats_0.56.0        evaluate_0.14            
##  [71] lattice_0.20-38           labeling_0.3             
##  [73] rstantools_2.0.0          htmlwidgets_1.3          
##  [75] cowplot_1.0.0             tidyselect_1.1.0         
##  [77] processx_3.4.3            plyr_1.8.5               
##  [79] magrittr_1.5              R6_2.4.1                 
##  [81] generics_0.0.2            pillar_1.4.4             
##  [83] haven_2.1.1               withr_2.2.0              
##  [85] xts_0.11-2                abind_1.4-5              
##  [87] modelr_0.1.5              crayon_1.3.4             
##  [89] arrayhelpers_1.0-20160527 utf8_1.1.4               
##  [91] rmarkdown_2.1             grid_3.6.1               
##  [93] readxl_1.3.1              callr_3.4.3              
##  [95] threejs_0.3.1             webshot_0.5.1            
##  [97] digest_0.6.25             xtable_1.8-4             
##  [99] httpuv_1.5.1              RcppParallel_5.0.2       
## [101] stats4_3.6.1              munsell_0.5.0            
## [103] viridisLite_0.3.0         kableExtra_1.1.0         
## [105] shinyjs_1.0